Chapter

Inverse problems with learned forward operators

  • Simon Arridge, Andreas Hauptmann and Yury Korolev
This chapter is in the book Data-driven Models in Inverse Problems

Abstract

Solving inverse problems requires knowledge of the forward operator, but accurate models can be computationally expensive, and hence cheaper variants that do not compromise the reconstruction quality are desired. This chapter reviews reconstruction methods in inverse problems with learned forward operators that follow two different paradigms. The first one is completely agnostic to the forward operator and learns its restriction to the subspace spanned by the training data. The framework of regularization by projection is then used to find a reconstruction. The second one uses a simplified model of the physics of the measurement process and only relies on the training data to learn a model correction. We present the theory of these two approaches and compare them numerically. A common theme emerges: both methods require, or at least benefit from, training data not only for the forward operator, but also for its adjoint.
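The first paradigm can be illustrated with a minimal linear-algebra sketch. All names and dimensions below are hypothetical, chosen for illustration only: given training pairs (x_i, y_i = A x_i), the operator A is known only through its action on the training subspace span{x_i}, and a reconstruction is sought within that subspace by solving a least-squares problem in the training coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth forward operator (never accessed directly
# by the reconstruction; it only generates the training data).
n, m, k = 8, 6, 4                      # signal dim, data dim, number of training samples
A_true = rng.standard_normal((m, n))

# Training pairs (x_i, y_i = A x_i): all the method sees of A.
X = rng.standard_normal((n, k))        # columns span the training subspace
Y = A_true @ X

# Measurement of an unknown signal that lies in the training subspace.
x_dagger = X @ rng.standard_normal(k)  # ground truth in span{x_i}
y = A_true @ x_dagger

# Regularization by projection (sketch): seek x = X c with Y c ~ y,
# i.e. solve the least-squares problem in the coefficients c.
c, *_ = np.linalg.lstsq(Y, y, rcond=None)
x_rec = X @ c

# Recovery error; small here because x_dagger lies exactly in span{x_i}
# and the data are noise-free.
print(np.linalg.norm(x_rec - x_dagger))
```

In this noise-free toy setting the recovery is exact up to rounding; the chapter's analysis concerns the realistic case where the unknown need not lie in the training subspace and the data are noisy.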
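The second paradigm can be sketched in the same spirit. Here a cheap approximate operator stands in for the accurate physics, and a correction is fitted from training data so that the corrected cheap model matches the accurate one. The linear correction and all operators below are illustrative assumptions, not the chapter's construction (which allows nonlinear, learned corrections).

```python
import numpy as np

rng = np.random.default_rng(1)

# Accurate (expensive) operator A and a cheap approximate model A_tilde;
# both are hypothetical stand-ins for illustration.
n, m, k = 6, 6, 40
A = rng.standard_normal((m, n))
A_tilde = A + 0.3 * rng.standard_normal((m, n))   # crude physics model

# Training data: outputs of both models on known phantoms.
X = rng.standard_normal((n, k))
Y_acc = A @ X            # accurate model outputs
Y_approx = A_tilde @ X   # approximate model outputs

# Fit a linear model correction F with F @ Y_approx ~ Y_acc
# (least squares over the training set, via the pseudoinverse).
F = Y_acc @ np.linalg.pinv(Y_approx)

# Apply the corrected cheap model to a fresh sample.
x_new = rng.standard_normal(n)
err_uncorrected = np.linalg.norm(A_tilde @ x_new - A @ x_new)
err_corrected = np.linalg.norm(F @ (A_tilde @ x_new) - A @ x_new)
print(err_corrected < err_uncorrected)
```

Because the toy correction is linear and the training set is rich, the fit here is essentially exact; in practice the correction is a learned, typically nonlinear, map and the interesting questions concern its behaviour away from the training data.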

Downloaded on 12.12.2025 from https://www.degruyterbrill.com/document/doi/10.1515/9783111251233-003/html